Efficient Learning of Mahalanobis Metrics for Ranking
Authors
Abstract
We develop an efficient algorithm to learn a Mahalanobis distance metric by directly optimizing a ranking loss. Our approach focuses on optimizing the top of the induced rankings, which is desirable in tasks such as visualization and nearest-neighbor retrieval. We further develop and justify a simple technique to reduce training time significantly with minimal impact on performance. Our proposed method significantly outperforms alternative methods on several real-world tasks, and can scale to large and high-dimensional data.
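For illustration, here is a minimal sketch of learning a Mahalanobis metric d_M(x, y) = (x − y)ᵀM(x − y) with M = LᵀL by minimizing a simple triplet hinge loss with stochastic gradient descent. This is an assumption-laden sketch, not the paper's exact algorithm; the function names, the loss, and the hyperparameters are all illustrative.

```python
import numpy as np

def mahalanobis_sq(L, x, y):
    """Squared Mahalanobis distance (x - y)^T M (x - y) with M = L^T L."""
    d = L @ (x - y)
    return float(d @ d)

def train_triplet_metric(X, triplets, out_dim=None, lr=0.01, margin=1.0,
                         epochs=10, seed=0):
    """Fit L from (anchor, positive, negative) index triplets by minimizing
    the hinge loss max(0, margin + d(a, p) - d(a, n)) with SGD."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    out_dim = out_dim or n_features
    L = rng.normal(scale=0.1, size=(out_dim, n_features))
    for _ in range(epochs):
        for a, p, n in triplets:
            dp, dn = X[a] - X[p], X[a] - X[n]
            loss = (margin + mahalanobis_sq(L, X[a], X[p])
                    - mahalanobis_sq(L, X[a], X[n]))
            if loss > 0:
                # Gradient of d(a, p) - d(a, n) with respect to L.
                grad = 2.0 * L @ (np.outer(dp, dp) - np.outer(dn, dn))
                L -= lr * grad
    return L
```

With a learned L, ranking a query against a database amounts to sorting candidates by mahalanobis_sq(L, query, candidate), which is why the quality of the top of the ranking depends directly on the learned metric.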
Similar resources
Efficient Retrieval for Large Scale Metric Learning
In this paper, we address the problem of efficient k-NN classification, in particular in the context of Mahalanobis metric learning. Mahalanobis metric learning has recently demonstrated competitive results for a variety of tasks. However, such approaches have two main drawbacks. First, learning a metric often requires solving complex and thus computationally very expensive optimization problems. ...
Learning Distance Functions using Equivalence Relations
We address the problem of learning distance metrics using side-information in the form of groups of "similar" points. We propose to use the RCA algorithm, a simple and efficient algorithm for learning a full-rank Mahalanobis metric (Shental et al., 2002). We first show that RCA obtains the solution to an interesting optimization problem, founded on an information-theoretic basis. If ...
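As a point of reference, the core RCA computation described by Shental et al. (2002) is short: the learned Mahalanobis matrix is the inverse of the average within-chunklet covariance, where a chunklet is a group of points known to be similar. The sketch below assumes that data layout; the function name is illustrative.

```python
import numpy as np

def rca_metric(chunklets):
    """chunklets: list of (n_points_i, n_features) arrays of 'similar' points.
    Returns the inverse of the average within-chunklet covariance, used as
    the Mahalanobis matrix."""
    n_total = sum(len(c) for c in chunklets)
    n_features = chunklets[0].shape[1]
    C = np.zeros((n_features, n_features))
    for c in chunklets:
        centered = c - c.mean(axis=0)   # center each chunklet separately
        C += centered.T @ centered
    C /= n_total
    return np.linalg.inv(C)
```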
Learning a Mahalanobis Metric with Side Information (Leibniz Center for Research in Computer Science, Technical Report 2003-34)
Many learning algorithms use a metric defined over the input space as a principal tool, and their performance critically depends on the quality of this metric. We address the problem of learning metrics using side-information in the form of equivalence constraints. We demonstrate that, unlike labels, this type of side-information can sometimes be obtained automatically, without the need for human ...
An investigation on scaling parameter and distance metrics in semi-supervised Fuzzy c-means
The scaling parameter α helps maintain a balance between supervised and unsupervised learning in semi-supervised Fuzzy c-Means (ssFCM). In this study, we investigated the effects of different α values (0.1, 0.5, 1 and 10) in Pedrycz and Waletzky's ssFCM with various amounts of labelled data (10%, 20%, 30%, 40%, 50% and 60%) and three distance metrics, Euclidean, Mahalanobis and kernel-based, on th...
metricDTW: local distance metric learning in Dynamic Time Warping
We propose to learn multiple local Mahalanobis distance metrics to perform k-nearest neighbor (kNN) classification of temporal sequences. Temporal sequences are first aligned by dynamic time warping (DTW); given the alignment path, similarity between two sequences is measured by the DTW distance, which is computed as the accumulated distance between matched temporal point pairs along the alignme...
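To make the alignment-plus-metric idea concrete, the sketch below computes a DTW distance in which the per-frame cost is a Mahalanobis distance under a given matrix M. It is only an illustration of the distance computation under an assumed, fixed M; the metricDTW learning procedure itself is not reproduced here, and all names are illustrative.

```python
import numpy as np

def dtw_mahalanobis(A, B, M):
    """DTW distance between sequences A (n, d) and B (m, d) where the
    per-frame cost is the Mahalanobis distance under a PSD matrix M."""
    n, m = len(A), len(B)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diff = A[i - 1] - B[j - 1]
            cost = np.sqrt(diff @ M @ diff)           # local Mahalanobis cost
            D[i, j] = cost + min(D[i - 1, j],         # insertion
                                 D[i, j - 1],         # deletion
                                 D[i - 1, j - 1])     # match
    return float(D[n, m])
```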
Journal title:
Volume/Issue:
Pages: -
Publication date: 2014